Transformer models have achieved superior performance in various natural language processing tasks. However, the quadratic computational cost of the attention mechanism limits their practicality for long sequences. Existing attention variants improve computational efficiency, but they have limited ability to compute global information effectively. In parallel to Transformer models, state space models (SSMs) are tailored for long sequences, but they are not flexible enough to capture complicated local information. We propose SPADE, short for $\underline{\textbf{S}}$tate s$\underline{\textbf{P}}$ace $\underline{\textbf{A}}$ugmente$\underline{\textbf{D}}$ Transform$\underline{\textbf{E}}$r. Specifically, we augment the bottom layer of SPADE with an SSM and employ efficient local attention methods in the other layers. The SSM supplies global information, which compensates for the inability of local attention methods to capture long-range dependencies. Experimental results on the Long Range Arena benchmark and on language modeling tasks demonstrate the effectiveness of the proposed method. To further demonstrate the scalability of SPADE, we pre-train large encoder-decoder models and present fine-tuning results on natural language understanding and natural language generation tasks.
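The hybrid layering idea above can be sketched in a few lines of numpy. This is a minimal toy illustration, not the paper's architecture: the exponential-decay scan stands in for a real SSM, and `spade_like_stack`, the window size, and the residual wiring are all illustrative assumptions.

```python
import numpy as np

def ssm_layer(x, decay=0.9):
    """Toy diagonal state-space scan: h_t = decay * h_{t-1} + x_t.
    Each position receives a summary of the whole prefix (global info)."""
    h = np.zeros_like(x[0])
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        h = decay * h + xt
        out[t] = h
    return out

def local_attention(x, window=2):
    """Sliding-window attention: each position attends only to neighbours
    within `window` steps, i.e. O(n * window) instead of O(n^2)."""
    n, d = x.shape
    out = np.empty_like(x)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = x[lo:hi] @ x[i] / np.sqrt(d)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[i] = w @ x[lo:hi]
    return out

def spade_like_stack(x, n_local_layers=2):
    """Bottom layer: SSM (global). Remaining layers: local attention."""
    h = ssm_layer(x)
    for _ in range(n_local_layers):
        h = h + local_attention(h)  # residual connection
    return h

x = np.random.default_rng(0).normal(size=(6, 4))
y = spade_like_stack(x)
```

The point of the split is that only the single bottom layer pays for global mixing, while every other layer stays linear in sequence length.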
E-commerce queries are often short and ambiguous, so query understanding typically applies query rewriting to disambiguate the user's input. When using e-commerce search tools, users tend to issue multiple searches before making a purchase; we call these searches the context. These historical searches contain contextual insights into the user's true shopping intent, so modeling such contextual information is critical for a better query rewriting model. However, existing query rewriting models ignore users' historical behavior and consider only the instant search query, which is often a short string offering limited information about the true shopping intent. We propose an end-to-end context-aware query rewriting model to bridge this gap, taking the search context into account. Specifically, our model builds a session graph from the historical search queries and the words they contain. We then employ a graph attention mechanism that models cross-query relations and computes contextual information for the session. The model subsequently computes a session representation by combining the contextual information with the instant search query through an aggregation network. The session representation is then decoded to generate the rewritten query. Empirically, we demonstrate the superiority of our method over state-of-the-art approaches under various metrics. On in-house data from an online shopping platform, by incorporating contextual information, our model achieves an 11.6% improvement under the MRR (mean reciprocal rank) metric and a 20.1% improvement under the HIT@16 metric (a hit-rate metric) over the best baseline method (a Transformer-based model).
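The session-graph attention step can be sketched as follows. This is a hedged toy illustration: the adjacency matrix, the dot-product `graph_attention` layer, and the mean-pool aggregation are assumptions for exposition, not the paper's exact architecture.

```python
import numpy as np

def graph_attention(h, adj):
    """One toy graph-attention layer: each node aggregates its
    neighbours, weighted by a softmax over dot-product scores."""
    n, d = h.shape
    out = np.zeros_like(h)
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        scores = h[nbrs] @ h[i] / np.sqrt(d)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[i] = w @ h[nbrs]
    return out

# Session graph: nodes 0-2 are historical queries, nodes 3-4 are words
# shared between them (self-loops on the diagonal).
adj = np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
])
h = np.random.default_rng(1).normal(size=(5, 8))
ctx = graph_attention(h, adj)

# Session representation: pooled context combined with the instant query.
instant = np.random.default_rng(2).normal(size=8)
session = np.concatenate([ctx.mean(axis=0), instant])
```

A decoder would then condition on `session` to generate the rewritten query.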
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs. Specifically, a GNN is first pre-trained on a large-scale unlabeled graph and then fine-tuned on a separate small labeled graph for downstream applications such as node classification. One popular pre-training approach is to mask out a proportion of the edges and train the GNN to recover them. However, such a generative method suffers from graph mismatch: the masked graph input to the GNN deviates from the original graph. To alleviate this issue, we propose DiP-GNN (Discriminative Pre-training of Graph Neural Networks). Specifically, we train a generator to recover the identities of the masked edges, and simultaneously we train a discriminator to distinguish the generated edges from the original graph's edges. In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a portion of the masked edges. Extensive experiments on large-scale homogeneous and heterogeneous graphs demonstrate the effectiveness of the framework.
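The generator/discriminator data flow can be sketched at the edge-list level. This is a deliberately naive sketch: `toy_generator` guesses endpoints at random where a real generator would score candidates with GNN embeddings, and the masking ratio and label scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_edges(edges, ratio=0.5):
    """Split an edge list into kept and masked subsets."""
    hide = rng.random(len(edges)) < ratio
    kept = [e for e, h in zip(edges, hide) if not h]
    masked = [e for e, h in zip(edges, hide) if h]
    return kept, masked

def toy_generator(masked, n_nodes):
    """Stand-in generator: guesses a new endpoint for each masked edge.
    (A real generator would rank candidates with learned embeddings.)"""
    return [(u, int(rng.integers(n_nodes))) for u, _ in masked]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
kept, masked = mask_edges(edges)
generated = toy_generator(masked, n_nodes=4)

# The discriminator sees kept + generated edges -- a graph closer to the
# original than the masked graph a purely generative method trains on.
disc_input = kept + generated
labels = [1] * len(kept) + [0] * len(generated)  # original vs generated
```

The key property is that `disc_input` has the same number of edges as the original graph, reducing the train-time/fine-tune-time mismatch.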
Point process models are of great importance in real-world applications. In some critical applications, estimating a point process model involves large amounts of sensitive personal data from users. Privacy concerns naturally arise, yet they have not been addressed in the existing literature. To bridge this glaring gap, we propose the first general differentially private estimation procedure for point process models. Specifically, taking the Hawkes process as an example, we introduce a rigorous definition of differential privacy for event-stream data based on a discretized representation of the Hawkes process. We then propose two differentially private optimization algorithms that can efficiently estimate Hawkes process models with the desired privacy and utility guarantees under two different settings. Experiments are provided to support our theoretical analysis.
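The discretization-plus-noise idea can be sketched with the standard Laplace mechanism. This is a generic sketch, not the paper's algorithm: the binning scheme, the neighbouring-stream definition (streams differing in one event), and the resulting sensitivity of 1 are stated assumptions.

```python
import numpy as np

def private_bin_counts(event_times, t_max, n_bins, epsilon, rng):
    """Discretize an event stream into time bins, then release the bin
    counts via the Laplace mechanism.  If neighbouring streams differ in
    a single event, one bin count changes by at most 1, so the L1
    sensitivity is 1 and noise scale 1/epsilon gives epsilon-DP."""
    counts, _ = np.histogram(event_times, bins=n_bins, range=(0.0, t_max))
    noise = rng.laplace(scale=1.0 / epsilon, size=n_bins)
    return counts + noise

rng = np.random.default_rng(0)
events = np.sort(rng.uniform(0.0, 10.0, size=50))
noisy = private_bin_counts(events, t_max=10.0, n_bins=20,
                           epsilon=1.0, rng=rng)
```

A Hawkes likelihood would then be fit to the privatized discrete representation rather than to the raw event times.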
Large Transformer-based models have demonstrated superior performance in various natural language processing and computer vision tasks. However, these models contain enormous numbers of parameters, which restrict their deployment in real-world applications. To reduce model size, researchers prune these models based on importance scores of the weights. However, such scores are usually estimated on mini-batches during training, which incurs large variability/uncertainty due to mini-batch sampling and complicated training dynamics. As a result, some crucial weights can be pruned by commonly used pruning methods because of this uncertainty, which makes training unstable and hurts generalization. To resolve this problem, we propose PLATON, which captures the uncertainty of importance scores via an upper confidence bound (UCB) on the importance estimates. In particular, for weights with low scores but high uncertainty, PLATON tends to retain them and explore their capacity. We conduct extensive experiments with several Transformer-based models on natural language understanding, question answering, and image classification to validate the effectiveness of PLATON. Results demonstrate that PLATON yields notable improvements at different sparsity levels. Our code is publicly available at https://github.com/qingruzhang/platon.
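The retain-the-uncertain idea can be sketched with exponential moving averages. A hedged sketch: the EMA coefficients, the product `importance x uncertainty` as the UCB-style score, and the top-k thresholding are simplifying assumptions, not PLATON's exact formulation.

```python
import numpy as np

def update_scores(I_bar, U_bar, inst_importance, beta1=0.85, beta2=0.95):
    """Smooth the noisy per-batch importance estimate and track its
    local variation (a proxy for uncertainty) with moving averages."""
    I_bar = beta1 * I_bar + (1 - beta1) * inst_importance
    U_bar = beta2 * U_bar + (1 - beta2) * np.abs(inst_importance - I_bar)
    return I_bar, U_bar

def prune_mask(I_bar, U_bar, keep_ratio):
    """UCB-style score: weights with low importance but high uncertainty
    still score well, so they are retained and explored."""
    score = I_bar * U_bar
    k = int(len(score) * keep_ratio)
    thresh = np.sort(score)[-k]
    return score >= thresh

rng = np.random.default_rng(0)
I_bar = np.zeros(10)
U_bar = np.zeros(10)
for _ in range(20):  # simulate noisy mini-batch importance estimates
    I_bar, U_bar = update_scores(I_bar, U_bar, rng.random(10))
mask = prune_mask(I_bar, U_bar, keep_ratio=0.5)
```

Pruning on a single noisy mini-batch estimate would instead threshold `inst_importance` directly, which is exactly the instability the smoothing avoids.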
Recent advances in trajectory optimization combined with function approximation (in particular, neural networks) have enabled learning complex control policies for robotic systems. Despite their great flexibility, large neural networks for parameterizing control policies impose significant challenges. Learned neural control policies are often over-parameterized and non-smooth, which can easily cause unexpected or divergent robot motions; hence they often yield poor generalization performance in practice. To address this issue, we propose adversarially regularized policy learning guided by trajectory optimization (VERONICA) for learning smooth control policies. Specifically, our proposed approach controls the smoothness (local Lipschitz continuity) of the neural control policy by stabilizing the output control against worst-case perturbations of the input state. Our experiments on robot manipulation show that the proposed approach not only improves the sample efficiency of neural policy learning but also enhances the robustness of the policy against various types of disturbances, including sensor noise, environmental uncertainty, and model mismatch.
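The worst-case-perturbation regularizer can be approximated numerically. This is a toy sketch under stated assumptions: the linear-tanh `policy`, the random-sampling stand-in for the inner adversarial maximization, and the perturbation radius are all illustrative, not the paper's method.

```python
import numpy as np

def policy(s, W):
    """Toy linear policy with tanh squashing."""
    return np.tanh(W @ s)

def smoothness_penalty(s, W, radius=0.1, n_samples=64, rng=None):
    """Approximate the worst-case change in the output control under
    input perturbations of bounded norm.  (A real implementation would
    solve the inner maximization with gradient ascent rather than
    random sampling.)"""
    if rng is None:
        rng = np.random.default_rng(0)
    base = policy(s, W)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.normal(size=s.shape)
        d *= radius / np.linalg.norm(d)
        worst = max(worst, np.linalg.norm(policy(s + d, W) - base))
    return worst

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 4))
s = rng.normal(size=4)
penalty = smoothness_penalty(s, W)
# Training would minimize: task_loss + lambda * penalty,
# pushing the policy toward local Lipschitz continuity around s.
```

A small penalty at every visited state means nearby states map to nearby controls, which is what suppresses the erratic motions the abstract describes.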
Creating an essay based on a few given topics is a challenging NLP task. Although several effective methods for this problem, topic-to-essay generation, have appeared recently, there is still much room for improvement, especially in terms of the coverage of the given topics and the coherence of the generated text. In this paper, we propose a novel approach called TegFormer which utilizes the Transformer architecture where the encoder is enriched with domain-specific contexts while the decoder is enhanced by a large-scale pre-trained language model. Specifically, a \emph{Topic-Extension} layer capturing the interaction between the given topics and their domain-specific contexts is plugged into the encoder. Since the given topics are usually concise and sparse, such an additional layer can bring in more topic-related semantics to facilitate the subsequent natural language generation. Moreover, an \emph{Embedding-Fusion} module that combines the domain-specific word embeddings learnt from the given corpus and the general-purpose word embeddings provided by a GPT-2 model pre-trained on massive text data is integrated into the decoder. Since GPT-2 is at a much larger scale, it contains far more implicit linguistic knowledge, which helps the decoder produce more grammatical and readable text. Extensive experiments have shown that the pieces of text generated by TegFormer have better topic coverage and higher text coherence than those from SOTA topic-to-essay techniques, according to automatic and human evaluations. As revealed by ablation studies, both the Topic-Extension layer and the Embedding-Fusion module contribute substantially to TegFormer's performance advantage.
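One common way to combine two embedding tables, sketched below, is a learned per-dimension gate. This is an assumption for illustration: the abstract does not specify the fusion operator, and `fuse_embeddings`, `W_g`, and the sigmoid gate are hypothetical names and choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_embeddings(e_domain, e_general, W_g):
    """Gated fusion of a domain-specific embedding and a general-purpose
    (GPT-2-style) embedding for the same word: the gate decides, per
    dimension, which source to trust."""
    g = sigmoid(W_g @ np.concatenate([e_domain, e_general]))
    return g * e_domain + (1.0 - g) * e_general

rng = np.random.default_rng(0)
d = 8
W_g = rng.normal(size=(d, 2 * d)) * 0.1
e_domain = rng.normal(size=d)    # learnt from the given corpus
e_general = rng.normal(size=d)   # from a large pre-trained LM
fused = fuse_embeddings(e_domain, e_general, W_g)
```

Because the gate is input-dependent, frequent in-domain terms can lean on the corpus embedding while rare words fall back on the pre-trained one.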
Referring image segmentation aims to segment the target object described by a given natural language expression. Typically, referring expressions contain complex relationships between the target and its surrounding objects. The main challenge of this task is to understand the visual and linguistic content simultaneously and to find the referred object accurately among all instances in the image. Currently, the most effective way to solve the above problem is to obtain aligned multi-modal features by computing the correlation between visual and linguistic feature modalities under the supervision of the ground-truth mask. However, existing paradigms have difficulty in thoroughly understanding visual and linguistic content due to their inability to directly perceive information about the objects surrounding the target. This prevents them from learning aligned multi-modal features, which leads to inaccurate segmentation. To address this issue, we present a position-aware contrastive alignment network (PCAN) to enhance the alignment of multi-modal features by guiding the interaction between vision and language through prior position information. Our PCAN consists of two modules: 1) Position Aware Module (PAM), which provides position information of all objects related to natural language descriptions, and 2) Contrastive Language Understanding Module (CLUM), which enhances multi-modal alignment by comparing the features of the referred object with those of related objects. Extensive experiments on three benchmarks demonstrate that our PCAN performs favorably against the state-of-the-art methods. Our code will be made publicly available.
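The contrast between the referred object and its surrounding objects can be sketched with a standard InfoNCE-style loss. A hedged sketch only: the similarity function, temperature, and the `contrastive_alignment` formulation are generic contrastive-learning choices, not necessarily CLUM's exact loss.

```python
import numpy as np

def contrastive_alignment(query, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull the linguistic query feature toward the
    referred object's feature, push it away from surrounding objects'."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([sim(query, positive)] +
                      [sim(query, n) for n in negatives]) / tau
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return -np.log(p[0])            # cross-entropy on the positive slot

rng = np.random.default_rng(0)
q = rng.normal(size=16)                          # language feature
pos = q + 0.1 * rng.normal(size=16)              # referred object
negs = [rng.normal(size=16) for _ in range(4)]   # surrounding objects
loss = contrastive_alignment(q, pos, negs)
```

Minimizing this loss is what "comparing the features of the referred object with those of related objects" amounts to in practice.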
Most deep-learning-based continuous sign language recognition (CSLR) models share a similar backbone consisting of a visual module, a sequential module, and an alignment module. However, due to limited training samples, a connectionist temporal classification loss may not train such CSLR backbones sufficiently. In this work, we propose three auxiliary tasks to enhance CSLR backbones. The first task enhances the visual module, which is sensitive to the insufficient-training problem, from the perspective of consistency. Specifically, since the information of sign languages is mainly contained in signers' facial expressions and hand movements, a keypoint-guided spatial attention module is developed to force the visual module to focus on informative regions, i.e., spatial attention consistency. Second, noticing that the output features of both the visual and sequential modules represent the same sentence, to better exploit the backbone's power, a sentence embedding consistency constraint is imposed between the visual and sequential modules to enhance the representation power of both features. We call the CSLR model trained with the above auxiliary tasks consistency-enhanced CSLR; it performs well on signer-dependent datasets, in which all signers appear during both training and testing. To make it more robust in the signer-independent setting, a signer removal module based on feature disentanglement is further proposed to remove signer information from the backbone. Extensive ablation studies are conducted to validate the effectiveness of these auxiliary tasks. More remarkably, with a transformer-based backbone, our model achieves state-of-the-art or competitive performance on five benchmarks: PHOENIX-2014, PHOENIX-2014-T, PHOENIX-2014-SI, CSL, and CSL-Daily.
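The sentence embedding consistency constraint can be sketched as a pooled-feature agreement loss. This is an illustrative guess at the mechanism: mean pooling and a cosine-distance penalty are common choices, not necessarily the paper's exact construction.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def sentence_consistency_loss(visual_feats, sequential_feats):
    """Pool each module's per-frame features into one sentence embedding
    and penalize disagreement between the two embeddings."""
    v = visual_feats.mean(axis=0)
    s = sequential_feats.mean(axis=0)
    return 1.0 - cosine(v, s)  # 0 when perfectly aligned, up to 2

rng = np.random.default_rng(0)
vis = rng.normal(size=(10, 16))              # visual-module features
seq = vis + 0.1 * rng.normal(size=(10, 16))  # sequential-module output
loss = sentence_consistency_loss(vis, seq)
```

Since both feature sequences describe the same sentence, driving this loss toward zero forces the two modules to agree at the sentence level.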
Accurate traffic flow prediction, a hotspot of intelligent transportation research, is the prerequisite for mastering traffic and making travel plans. The speed of traffic flow can be affected by road conditions, weather, holidays, etc. Furthermore, the sensors that capture traffic-flow information are subject to interference from environmental factors such as illumination, collection time, and occlusion. Therefore, the traffic flow in a practical transportation system is complicated, uncertain, and challenging to predict accurately. This paper proposes a deep encoder-decoder prediction framework based on variational Bayesian inference. A Bayesian neural network is constructed by combining variational inference with gated recurrent units (GRU) and used as the deep neural network unit of the encoder-decoder framework to mine the intrinsic dynamics of traffic flow. Then, variational inference is introduced into the multi-head attention mechanism to avoid noise-induced deterioration of prediction accuracy. The proposed model achieves superior prediction performance on the Guangzhou urban traffic flow dataset over the benchmarks, particularly for long-term prediction.
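The variational-inference ingredient can be sketched with a reparameterized Bayesian layer. A minimal sketch, assuming a mean-field Gaussian posterior over the weights; `bayes_linear` and the softplus parameterization are standard variational-BNN conventions, not this paper's specific GRU construction.

```python
import numpy as np

def bayes_linear(x, mu, rho, rng):
    """Variational Bayesian linear layer: weights are sampled as
    W = mu + softplus(rho) * eps (reparameterization trick), so every
    forward pass yields a slightly different prediction."""
    sigma = np.log1p(np.exp(rho))   # softplus keeps sigma positive
    eps = rng.normal(size=mu.shape)
    W = mu + sigma * eps
    return W @ x

rng = np.random.default_rng(0)
mu = rng.normal(size=(3, 5)) * 0.1
rho = np.full((3, 5), -3.0)         # small initial weight uncertainty
x = rng.normal(size=5)

# Monte-Carlo prediction: average several stochastic forward passes;
# the spread across samples is the model's predictive uncertainty.
samples = np.stack([bayes_linear(x, mu, rho, rng) for _ in range(16)])
mean_pred, std_pred = samples.mean(axis=0), samples.std(axis=0)
```

In the paper's setting such stochastic units replace the deterministic GRU weights, which is what lets the model express uncertainty about noisy traffic observations.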